Extending Continuous Time Bayesian Networks for Parametric Distributions

Authors

  • Logan Perreault
  • Monica Thornton
  • Rollie Goodman
  • John W. Sheppard
Abstract

The use of phase-type distributions has been suggested as a way to extend the representational power of the continuous time Bayesian network framework beyond exponentially distributed state transitions. However, much of the discussion has focused on approximating a distribution that is learned from available data. This method is inadequate for applications where there is not sufficient data to represent a distribution. In this paper, we suggest a method for learning phase-type distributions from known parametric distributions. We find that by minimizing a modified KL-divergence value, we are able to obtain good phase-type approximations for a variety of parametric distributions. In addition, we investigate the effects of using varying numbers of phases. Finally, we propose and evaluate an extension that uses informed starting locations for the optimization process rather than random initialization.
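As a rough illustration of the idea described in the abstract, the sketch below fits a phase-type distribution to a known parametric distribution (here a Weibull) by numerically minimizing a discretized KL divergence. It assumes a Coxian phase-type parameterization and a plain KL objective rather than the paper's modified KL-divergence value, and uses an arbitrary random-free starting point; the function names, target distribution, and optimizer choice are illustrative, not the authors' implementation.

# Hedged sketch: approximate a parametric distribution with a Coxian
# phase-type (PH) distribution by minimizing a discretized KL divergence.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize
from scipy.stats import weibull_min

def coxian_pdf(t, rates, exits):
    # rates[i]: total outflow rate of phase i; exits[i]: probability of
    # absorbing (finishing) from phase i instead of moving to phase i+1.
    n = len(rates)
    S = np.zeros((n, n))
    for i in range(n):
        S[i, i] = -rates[i]
        if i + 1 < n:
            S[i, i + 1] = rates[i] * (1.0 - exits[i])
    s0 = -S @ np.ones(n)                 # absorption rate vector
    alpha = np.zeros(n)
    alpha[0] = 1.0                       # always start in the first phase
    return np.array([alpha @ expm(S * ti) @ s0 for ti in t])

def kl_objective(params, t, target_pdf, n_phases):
    # Unconstrained parameters -> positive rates and (0, 1) exit probabilities.
    rates = np.exp(params[:n_phases])
    exits = 1.0 / (1.0 + np.exp(-params[n_phases:]))
    exits[-1] = 1.0                      # last phase must absorb
    q = np.maximum(coxian_pdf(t, rates, exits), 1e-12)
    p = np.maximum(target_pdf, 1e-12)
    dt = t[1] - t[0]
    return np.sum(p * np.log(p / q)) * dt   # discretized KL(p || q)

# Target: a Weibull density, which a single exponential cannot capture.
t = np.linspace(1e-3, 10.0, 400)
target = weibull_min(c=2.0, scale=3.0).pdf(t)

n_phases = 3
x0 = np.zeros(2 * n_phases)              # neutral start; the paper also studies informed starts
res = minimize(kl_objective, x0, args=(t, target, n_phases),
               method="Nelder-Mead", options={"maxiter": 5000})
print("approximate KL divergence:", res.fun)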


Similar Articles

Expectation Maximization and Complex Duration Distributions for Continuous Time Bayesian Networks

Continuous time Bayesian networks (CTBNs) describe structured stochastic processes with finitely many states that evolve over continuous time. A CTBN is a directed (possibly cyclic) dependency graph over a set of variables, each of which represents a finite state continuous time Markov process whose transition model is a function of its parents. We address the problem of learning the parameters...
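For readers unfamiliar with the representation described above, the following sketch shows one plausible way to encode a CTBN's dependency graph and its per-parent conditional intensity matrices (CIMs); the variable names and dictionary layout are assumptions made for illustration, not code from the cited paper.

# Minimal sketch of a CTBN structure: a directed (possibly cyclic) graph over
# variables, each storing one intensity matrix per parent instantiation.
import numpy as np

# Dependency graph: each variable lists its parents.
parents = {
    "A": [],
    "B": ["A"],
}

# One intensity matrix per parent instantiation. Off-diagonal entries are
# transition rates; each row sums to zero.
cims = {
    "A": {(): np.array([[-1.0, 1.0],
                        [ 2.0, -2.0]])},
    "B": {(0,): np.array([[-0.5, 0.5],
                          [ 1.0, -1.0]]),
          (1,): np.array([[-3.0, 3.0],
                          [ 0.1, -0.1]])},
}

# Sanity check: every CIM row should sum to (approximately) zero.
for var, table in cims.items():
    for cfg, Q in table.items():
        assert np.allclose(Q.sum(axis=1), 0.0), (var, cfg)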


Extending Continuous Time Bayesian Networks

Continuous-time Bayesian networks (CTBNs) (Nodelman, Shelton, & Koller 2002; 2003) are an elegant modeling language for structured stochastic processes that evolve over continuous time. The CTBN framework is based on homogeneous Markov processes, and defines two distributions with respect to each local variable in the system, given its parents: an exponential distribution over when the variabl...
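The snippet below illustrates the two local distributions mentioned above: sampling when a variable next transitions (an exponential dwell time) and where it transitions to (a choice proportional to the off-diagonal rates of its intensity matrix). The helper name and example matrix are illustrative assumptions, not material from the cited work.

# Sample "when" and "where" for one variable given its current intensity matrix.
import numpy as np

def sample_when_and_where(Q, state, rng):
    rate = -Q[state, state]               # total rate of leaving `state`
    dwell = rng.exponential(1.0 / rate)   # "when": exponential dwell time
    probs = Q[state].copy()
    probs[state] = 0.0
    probs /= rate                         # "where": normalized transition rates
    next_state = rng.choice(len(probs), p=probs)
    return dwell, next_state

Q = np.array([[-0.5, 0.3, 0.2],
              [ 1.0, -1.5, 0.5],
              [ 0.4, 0.6, -1.0]])
rng = np.random.default_rng(0)
print(sample_when_and_where(Q, state=0, rng=rng))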


Introducing of Dirichlet process prior in the Nonparametric Bayesian models frame work

Statistical models are used to learn about the mechanism from which the data are generated. Often it is assumed that the random variables y_i, i=1,…,n, are samples from a probability distribution F that belongs to a parametric class of distributions. However, in practice, a parametric model may be inappropriate to describe the data. In this setting, the parametric assumption could be r...


"Ideal Parent" Structure Learning for Continuous Variable Bayesian Networks

Bayesian networks in general, and continuous variable networks in particular, have become increasingly popular in recent years, largely due to advances in methods that facilitate automatic learning from data. Yet, despite these advances, the key task of learning the structure of such models remains a computationally intensive procedure, which limits most applications to parameter learning. This...


"Ideal Parent" Structure Learning for Continuous Variable Networks

In recent years, there has been growing interest in learning Bayesian networks with continuous variables. Learning the structure of such networks is a computationally expensive procedure, which limits most applications to parameter learning. This problem is even more acute when learning networks with hidden variables. We present a general method for significantly speeding the structure search algor...


Journal:

Volume   Issue

Pages  -

Publication date: 2015